
    Temporal mixture ensemble models for probabilistic forecasting of intraday cryptocurrency volume

    We study the problem of intraday short-term volume forecasting in cryptocurrency multi-markets. The predictions are built using transaction and order book data from the different markets where the exchange takes place. Methodologically, we propose a temporal mixture ensemble capable of adaptively exploiting different sources of data for forecasting and providing a volume point estimate as well as its uncertainty. We provide evidence that our model clearly outperforms econometric models. Moreover, our model performs slightly better than a Gradient Boosting Machine while offering much clearer interpretability of its results. Finally, we show that these results remain robust when the prediction analysis is restricted to each volume quartile.
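    A minimal sketch of the mixture idea, assuming Gaussian forecast components (one per data source) combined by gate weights; the names and the two-source setup are illustrative, not the authors' implementation:

```python
# Sketch of a temporal mixture forecast: one Gaussian component per data
# source (e.g. trades vs. order book), combined by gate weights that, in the
# full model, would be learned and time-varying. Illustrative only.
import numpy as np

def mixture_forecast(means, variances, gate_weights):
    """Combine per-source Gaussian forecasts into a point estimate and
    an uncertainty via the law of total variance."""
    w = np.asarray(gate_weights, dtype=float)
    mu = np.asarray(means, dtype=float)
    var = np.asarray(variances, dtype=float)
    point = w @ mu                               # mixture mean
    spread = w @ var + w @ (mu - point) ** 2     # total variance
    return point, spread

# Example: trade-based and order-book-based forecasts of the next volume bar.
point, var = mixture_forecast(means=[120.0, 150.0],
                              variances=[400.0, 900.0],
                              gate_weights=[0.6, 0.4])
print(f"forecast volume: {point:.1f} +/- {np.sqrt(var):.1f}")
```

    The mixture variance decomposes into within-source uncertainty plus disagreement between sources, which is what lets the ensemble report an uncertainty alongside the point estimate.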

    Building a multisystemic understanding of societal resilience to the COVID-19 pandemic

    The current global systemic crisis reveals how globalised societies are unprepared to face a pandemic. Beyond the dramatic loss of human life, the COVID-19 pandemic has triggered widespread disturbances in health, social, economic, environmental and governance systems in many countries across the world. Resilience describes the capacities of natural and human systems to prevent, react to and recover from shocks. Societal resilience to the current COVID-19 pandemic relates to the ability of societies to maintain their core functions while minimising the impact of the pandemic and other societal effects. Drawing on the emerging evidence about resilience in health, social, economic, environmental and governance systems, this paper delineates a multisystemic understanding of societal resilience to COVID-19. Such an understanding provides the foundation for an integrated approach to building societal resilience to current and future pandemics.

    Ensemble approach for generalized network dismantling

    Finding a set of nodes in a network whose removal fragments the network below some target size at minimal cost is called the network dismantling problem, and it belongs to the NP-hard computational class. In this paper, we explore the (generalized) network dismantling problem using a spectral approximation with a variant of the power-iteration method. In particular, we explore the network dismantling solution landscape by creating an ensemble of possible solutions from different initial conditions and different numbers of iterations of the spectral approximation.
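    A hedged sketch of the spectral ingredient: approximating the Fiedler vector of the graph Laplacian by power iteration and splitting nodes by sign. The cost-weighted Laplacian of the generalized problem and the full dismantling loop are omitted here:

```python
# Approximate the Fiedler vector of a graph Laplacian L by power iteration
# on (c*I - L), deflating the trivial all-ones eigenvector each step.
# Different random seeds / iteration counts yield the solution ensemble.
import numpy as np

def approx_fiedler_vector(adj, n_iter=200, seed=0):
    deg = adj.sum(axis=1)
    lap = np.diag(deg) - adj                 # combinatorial Laplacian
    c = 2.0 * deg.max()                      # upper bound on eigenvalues of L
    shifted = c * np.eye(len(adj)) - lap     # top nontrivial eigenvector of
    rng = np.random.default_rng(seed)        # 'shifted' is the Fiedler vector
    v = rng.standard_normal(len(adj))
    ones = np.ones(len(adj)) / np.sqrt(len(adj))
    for _ in range(n_iter):
        v -= (v @ ones) * ones               # project out trivial direction
        v = shifted @ v
        v /= np.linalg.norm(v)
    return v

# Toy graph: two triangles joined by a single bridge edge (2-3); the sign
# split recovers the two sides, so bridge nodes become removal candidates.
adj = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (3, 5), (4, 5)]:
    adj[i, j] = adj[j, i] = 1.0
v = approx_fiedler_vector(adj)
print("partition:", np.where(v > 0)[0], np.where(v <= 0)[0])
```

    Varying `seed` and `n_iter` changes which near-optimal partition the iteration lands on, which is the ensemble-of-solutions idea described in the abstract.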

    Modern temporal network theory: A colloquium

    The power of any kind of network approach lies in the ability to simplify a complex system so that one can better understand its function as a whole. Sometimes it is beneficial, however, to include more information than in a simple graph of only nodes and links. Adding information about the times of interactions can make predictions and mechanistic understanding more accurate. The drawback, however, is that not many methods are available, partly because temporal networks are a relatively young field and partly because such methods are more difficult to develop than for static networks. In this colloquium, we review the methods to analyze and model temporal networks and processes taking place on them, focusing mainly on the last three years. This includes the spreading of infectious diseases, opinions and rumors in social networks; information packets in computer networks; various types of signaling in biology; and more. We also discuss future directions.
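    A tiny illustration of the core object (my own, not taken from the colloquium): a temporal network as time-stamped contacts, and a deterministic SI process that can only travel along time-respecting paths:

```python
# A temporal network as a list of (time, u, v) contacts, with a deterministic
# susceptible-infected (SI) spread that respects the time ordering.
contacts = [(1, "a", "b"), (2, "b", "c"), (3, "a", "d"), (4, "c", "d")]

def si_spread(contacts, seed_node):
    infected = {seed_node}
    for t, u, v in sorted(contacts):   # contacts must be replayed in order
        if u in infected or v in infected:
            infected |= {u, v}
    return infected

print(sorted(si_spread(contacts, "a")))  # ['a', 'b', 'c', 'd']
print(sorted(si_spread(contacts, "c")))  # ['b', 'c', 'd'] -- 'a' unreachable
```

    Starting from "c", node "a" is never reached because both contacts involving "a" occur before the infection could arrive, even though the static graph is connected; this asymmetry is exactly the extra information that timestamps add.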

    Predicting bankruptcy of local government: A machine learning approach

    In this paper we analyze the predictability of the bankruptcy of 7795 Italian municipalities in the period 2009–2016. The prediction task is extremely hard due to the small number of bankruptcy cases on which learning is possible. Besides historical financial data for each municipality, we use alternative institutional data along with the socio-demographic and economic context. Predictability is analyzed through the performance of statistical and machine learning models, evaluated with the receiver operating characteristic curve and the precision-recall curve. Our results suggest that it is possible to make out-of-sample predictions with a high true-positive rate and a low false-positive rate. The model shows that some non-financial features (e.g. geographical area) are more important than many financial features for predicting the default of municipalities. Among financial indicators, the important features are mainly connected to the deficit and the debt of municipalities. Among the socio-demographic characteristics of administrators, the gender and age of council members are among the top 10 features in terms of importance for predicting municipal defaults.
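    A hedged sketch of this evaluation setup on synthetic data (the municipal features themselves are not reproduced): a gradient-boosting classifier on a heavily imbalanced sample, scored with both the ROC AUC and the precision-recall AUC. With roughly 1% positives a trivial classifier already achieves about 99% accuracy, which is why such papers report ROC and precision-recall curves instead:

```python
# Classifier evaluation under class imbalance, mimicking rare bankruptcies
# with ~1% positives. Synthetic data; illustrative of the metrics only.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import average_precision_score, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=8000, n_features=20,
                           weights=[0.99], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
scores = model.predict_proba(X_te)[:, 1]
print("ROC AUC:", roc_auc_score(y_te, scores))
print("PR  AUC:", average_precision_score(y_te, scores))
```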

    On the impact of publicly available news and information transfer to financial markets

    We quantify the propagation and absorption of large-scale publicly available news articles from the World Wide Web to financial markets. To extract publicly available information, we use the news archives from the Common Crawl, a non-profit organization that crawls a large part of the web. We develop a processing pipeline to identify news articles associated with the constituent companies of the S&P 500 index, an equity market index that measures the stock performance of US companies. Using machine learning techniques, we extract sentiment scores from the Common Crawl News data and employ tools from information theory to quantify the information transfer from public news articles to the US stock market. Furthermore, we analyse and quantify the economic significance of the news-based information with a simple sentiment-based portfolio trading strategy. Our findings support the hypothesis that information in publicly available news on the World Wide Web has a statistically and economically significant impact on events in financial markets.
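    One standard information-theoretic tool for this kind of analysis is transfer entropy; the sketch below (my own, not the paper's pipeline) estimates it with a plug-in estimator on binarized daily sentiment and return series, using history length 1:

```python
# Transfer entropy TE(X -> Y) for binary series with one-step histories:
# how much past sentiment reduces uncertainty about the next return beyond
# the return's own past. Plug-in estimator; illustrative only.
import numpy as np
from collections import Counter

def transfer_entropy(source, target):
    x, y = np.asarray(source), np.asarray(target)
    triples = list(zip(y[1:], y[:-1], x[:-1]))    # (y_t, y_{t-1}, x_{t-1})
    n = len(triples)
    c_xyz = Counter(triples)
    c_yz = Counter((yt, yp) for yt, yp, _ in triples)
    c_z = Counter(yp for _, yp, _ in triples)
    c_zx = Counter((yp, xp) for _, yp, xp in triples)
    te = 0.0
    for (yt, yp, xp), c in c_xyz.items():
        # ratio of p(y_t | y_{t-1}, x_{t-1}) to p(y_t | y_{t-1})
        te += (c / n) * np.log2((c / c_zx[(yp, xp)])
                                / (c_yz[(yt, yp)] / c_z[yp]))
    return te

rng = np.random.default_rng(0)
sentiment = rng.integers(0, 2, 500)
returns = np.roll(sentiment, 1) ^ (rng.random(500) < 0.2)  # noisy lagged copy
print("TE(sentiment -> returns):", transfer_entropy(sentiment, returns))
```

    Because the toy returns are a noisy one-day-lagged copy of sentiment, the estimate comes out clearly positive; on real data one would additionally test significance against shuffled surrogate series.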

    Governance in the age of complexity: building resilience to COVID-19 and future pandemics

    This policy brief aims to promote a holistic mindset about the COVID-19 pandemic by 1) applying a complexity lens to understand its drivers, nature, and impact, 2) proposing actions to build societies resilient to pandemics, and 3) deriving principles to govern complex systemic crises. Building resilience to prevent, react to, and recover from systemic shocks needs to become a core element of how societies are governed. This requires an integrated approach across health, social, economic, environmental, and institutional systems. The brief has been developed by a team of researchers from both the natural and social sciences. Reviewed by a group of policy actors, it aims to foster a dialogue between academic institutions and policymakers.

    Maximizing the Likelihood of Detecting Outbreaks in Temporal Networks

    Epidemic spreading occurs among animals, humans, or computers and causes substantial societal, personal, or economic losses if left undetected. Based on known temporal contact networks, we propose an outbreak detection method that identifies a small set of nodes such that the likelihood of detecting recent outbreaks is maximal. The two-step procedure involves i) simulating spreading scenarios from all possible seed configurations and ii) greedily selecting nodes for monitoring in order to maximize the detection likelihood. We find that the detection likelihood is a submodular set function, for which it has been proven that greedy optimization attains at least 63% of the (intractable) optimal solution. The results show that the proposed method detects more outbreaks than recently suggested benchmark methods and is robust against badly chosen parameters. In addition, our method can be used for outbreak source detection. A limitation of this method is its heavy use of computational resources; however, for large graphs the method could easily be parallelized.
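    A hedged sketch of step ii), assuming each pre-simulated outbreak is summarized as the set of nodes it reaches; names are illustrative. Greedy maximization of this coverage-style submodular objective carries the 1 - 1/e (about 63%) guarantee mentioned above:

```python
# Greedy monitor selection: repeatedly add the node that newly detects the
# most simulated outbreaks. Outbreaks are given as sets of reached nodes.
def greedy_monitors(outbreaks, candidate_nodes, k):
    monitors, detected = [], set()
    for _ in range(k):
        def gain(node):  # still-undetected outbreaks this node would catch
            return sum(1 for i, reached in enumerate(outbreaks)
                       if i not in detected and node in reached)
        best = max(candidate_nodes, key=gain)
        monitors.append(best)
        detected |= {i for i, reached in enumerate(outbreaks)
                     if best in reached}
    return monitors

# Toy example: four simulated outbreaks over nodes 0..4.
outbreaks = [{0, 1, 2}, {2, 3}, {3, 4}, {1, 4}]
print(greedy_monitors(outbreaks, candidate_nodes=range(5), k=2))  # [1, 3]
```

    The simulation step i) that produces `outbreaks` (and any detection-time weighting) is the expensive part the abstract refers to; each seed configuration can be simulated independently, which is why the method parallelizes well on large graphs.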